"The core of education is not under threat."

AI might change current teaching practices, but never the human need for knowledge. Credit: Jasin Goodman/Unsplash

How is AI changing the face of education? Especially since the success of intelligent chatbots, lecturers have been asking themselves this question. John Bai is a postdoctoral researcher at the Carl von Ossietzky Universität Oldenburg and is working on this topic. In this interview, he gives an optimistic outlook on the future.

In the public debate, AI in education is mainly discussed in an alarmist manner. Is AI a threat to education as we know it?

That's a really good way to phrase that question. Is it a threat to education? No. The core of education is teaching and learning. That's something that humans just do and will always do. However, our current teaching and learning practices will change. When people say, “Students are going to use ChatGPT to cheat,” that's a legitimate fear. Cheating has been a problem for centuries.

I read something that resonated with me: ChatGPT democratizes the ability to cheat. You no longer have to find someone and pay them to write an essay for you. You can just chuck it into ChatGPT. So this is going to change the way we use written assessments. The days of “You're just going to take this question, write an essay and give it to me. Then I'm going to determine whether you know something or not” might be over.

The practices will change. Nevertheless, the core of education is not under threat.

Our interview partner John Bai

John Bai

John Y. H. Bai is a Post-doctoral Researcher in the Faculty of Education and Social Sciences at the Carl von Ossietzky University of Oldenburg. After completing his PhD in Experimental Psychology at the University of Auckland, New Zealand, he worked as a Research Fellow and then as a Lecturer at the same institution. Currently, in collaboration with the international team at the Center for Open Education Research (COER), he is working on the project “Prospects for the Future of Learning: Artificial Intelligence Applications in Higher Education”, funded by the Volkswagen Foundation and The Lower Saxony Ministry of Science and Culture.

This international project collates the perspectives of higher education faculty across six countries/regions and aims to contribute to the interdisciplinary and multi-stakeholder discussion on AI and society, in line with the recently released UNESCO (2021) recommendation on the Ethics of Artificial Intelligence.

What could replace written assessments?

This is a question that a lot of people are trying to solve. We're using written assessments because they're convenient, right? They're convenient to give out. They're convenient to mark. But they've always been a proxy measure. “I’m going to give you an assignment. You write about it. From what you've written, I make a judgment about whether you understand this concept or not”.

However, that core idea of “do you understand this concept” can be accessed in a lot of different ways. There's a lot of hype, but also a lot of really good work in AIEd on more naturalistic or more valid measures of knowledge. Can you develop these intelligent tutoring systems which, based on your performance and your interactions, can give me a better indication of whether or not you really understand the concept?

Could you explain that in a little more detail, especially the last part? What would that look like?

The general idea is that you want to mimic a one-on-one tutor. So, you have a knowledge structure about “these are the concepts that the student should learn”. You have a progression of tasks that the student does. Right? Based on their performance in these tasks, the intelligent tutoring system will say: “Oh yeah, it looks like you've understood this. We're going to move you on to the next bit.” Or, “it looks like you're struggling with this. Here's another exercise that will help you understand this in a different way”.

That's the dream for these intelligent tutoring systems. Kind of a more accurate or valid way of assessing.
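To make that adaptive loop a bit more concrete, here is a minimal, purely illustrative sketch in Python of the branching logic Bai describes: advance when recent performance suggests mastery, otherwise present the concept in a different way. The curriculum, mastery threshold, and exercise names are hypothetical placeholders, not part of any real intelligent tutoring system.

```python
# Hypothetical sketch of an intelligent tutoring system's branching logic.
# Concepts, exercises, and the mastery threshold are illustrative assumptions.

CURRICULUM = {
    "fractions": ["exercise_A", "exercise_B", "alternative_exercise"],
    "decimals": ["exercise_C", "exercise_D", "alternative_exercise"],
}
MASTERY_THRESHOLD = 0.8  # assumed cut-off for "you've understood this"


def next_step(concept: str, recent_scores: list[float]) -> str:
    """Decide whether to advance the learner or offer a different exercise."""
    if not recent_scores:
        return f"Start with {CURRICULUM[concept][0]} for '{concept}'."
    performance = sum(recent_scores) / len(recent_scores)
    if performance >= MASTERY_THRESHOLD:
        return f"Looks like you've understood '{concept}'. Moving on."
    # Struggling: present the same concept in a different way.
    return f"Here's another exercise for '{concept}': {CURRICULUM[concept][-1]}."


print(next_step("fractions", [0.9, 0.85]))  # advances the learner
print(next_step("decimals", [0.4, 0.5]))    # offers an alternative exercise
```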

You don't really hear stories about futures where nothing goes wrong. Because they make for really boring stories.

We are in this very interesting age where everything is changing, and people are getting nervous. Can you give us some orientation on where this voyage will go?

I think it's really easy to default to a dystopian future. I had this interview where one of our participants said: "You don't really hear stories about futures where nothing goes wrong. Because they make for really boring stories.” So the things that we're used to ingesting are like Terminator. Or, you know, all sorts of...

The robots are always coming for us.

Exactly! I think a lot of our expectations about the future get shaped by these kinds of fictional depictions of the future that we see. But at the same time, there are probably thousands of researchers working hard at developing tools that make for a more inclusive future. Just to counter that very dystopian, negative perception: there's actually a lot of good that can come from redefining what we want from the future. I don't know if I can sketch you a utopian future, but if we're careful and systematic about how we develop these tools for the purposes of education, I think we can improve education, improve student outcomes, improve scalability and reach. That's the goal. There's avoiding the dystopic, but there's also moving towards things that we really want. For that, we have to have a conversation about what we actually want from higher education.

Because we know that teaching involves a lot of administrative tasks, a lot of assessment for the sake of assessment, which maybe isn't so reflective of how it benefits the student.

More about the TEACH conference

The TEACH conference

Explore, Exchange, Excel – our third edition of the conference Teaching Across Communities at Helmholtz is happening in December 2023!

At the TEACH conference, teaching personnel, researchers, but also training coordinators and personnel developers come together to exchange best practices, methods and experiences in teaching, as well as to share resources on the whole education life cycle.

This year’s TEACH conference intends to explore topics that will increasingly influence the way we educate and train people in the Helmholtz Association in the future – be it PhD candidates, postdocs, or established scientists. Thus, the conference this year focuses on the role of AI in education, mental health, and online vs. offline training settings.

As a kick-off, Dr. John Bai (COER – Center for Open Education Research, Carl von Ossietzky Universität Oldenburg) will give a keynote on artificial intelligence applications in higher education (Abstract).

Now, how should we approach the topic of AI in education: with caution? With optimism?

I don't think there's an explicit should or should not. I think different people are going to approach it in different ways, and that diversity of approaches and the conversation between those different views is going to be an important way of shaping the future of AI in education. The next question is what is useful to us. Caution is useful to a certain extent, but so is the excitement you get when you're thinking about how you can use AI. That's really useful as well.

I think the most useful thing has been curiosity. If you're curious about how AI works or how you might be able to use it in your teaching or your learning, you go out and find some information about it. You know, that thing that compels you to learn. The core root of curiosity. That’s the approach that I prefer for myself.

You will hold a keynote speech at our “TEACH” conference. What can participants expect from your keynote?

The working title for the keynote presentation is "A primer on AI in education". I'm hoping to introduce people to the scope: how wide is this research field? Also, zoom in and look at a couple of applications in some more depth. Then, from those examples, draw out some generalizable things we can take away about how AI research is progressing. It's mainly focused on the academic research on AIEd. But you can approach AIEd in so many different ways. I've read legal arguments, I've read corporate reports. There are so many different ways, but as an academic, I'm going to approach it like an academic.

We're mainly going to talk about an updated systematic review of the AIEd literature, then zoom in on a few examples and hopefully build a discussion from that. Hopefully, we'll leave plenty of discussion time for the audience to chip in, discuss, argue, and generate some interesting questions.

The interview was conducted by Laila Oudray
